
    Extended transition rates and lifetimes in Al I and Al II from systematic multiconfiguration calculations

    Multiconfiguration Dirac-Hartree-Fock (MCDHF) and relativistic configuration interaction (RCI) calculations were performed for 28 and 78 states in neutral and singly ionized aluminium, respectively. In Al I, the configurations of interest are 3s^2 nl for n = 3, 4, 5 with l = 0 to 4, as well as 3s 3p^2 and 3s^2 6l for l = 0, 1, 2. In Al II, the studied configurations are, besides the ground configuration 3s^2, 3s nl with n = 3 to 6 and l = 0 to 5, together with 3p^2, 3s 7s, 3s 7p and 3p 3d. Valence and core-valence electron correlation effects are systematically accounted for through large configuration state function (CSF) expansions. Calculated excitation energies are found to be in excellent agreement with experimental data from the NIST database. Lifetimes and transition data for radiative electric dipole (E1) transitions are given and compared with results from previous calculations and available measurements, for both Al I and Al II. The computed lifetimes of Al I are in very good agreement with lifetimes measured in high-precision laser spectroscopy experiments. The present calculations provide a substantial amount of updated atomic data, including transition data in the infrared region. This is particularly important since the new generation of telescopes is designed for this region. There is a significant improvement in accuracy, in particular for the more complex system of neutral Al I. The complete tables of transition data are available.
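    The radiative lifetime reported for an excited state follows directly from its E1 transition rates: it is the inverse of the sum of the Einstein A coefficients over all decay channels. A minimal sketch of that relation (not the authors' code; the A values below are illustrative placeholders, not data from the paper):

```python
def lifetime(a_coefficients):
    """Lifetime (s) of an excited state from its E1 transition rates (s^-1).

    tau_k = 1 / sum_i A_ki, summing the Einstein A coefficients over
    all radiative decay channels of state k.
    """
    total_rate = sum(a_coefficients)
    if total_rate <= 0:
        raise ValueError("state has no radiative decay channels")
    return 1.0 / total_rate

# Hypothetical state with two decay channels (placeholder rates):
tau = lifetime([4.0e7, 1.0e7])  # total rate 5.0e7 s^-1 -> tau = 2.0e-8 s
print(f"tau = {tau:.2e} s")
```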

    A nod in the wrong direction: Does nonverbal feedback affect eyewitness confidence in interviews?

    Eyewitnesses can be influenced by an interviewer's behaviour and report information with inflated confidence as a result. Previous research has shown that positive feedback administered verbally can affect the confidence attributed to testimony, but the effect of non-verbal influence in interviews has been given little attention. This study investigated whether positive or negative non-verbal feedback could affect the confidence witnesses attribute to their responses. Participants witnessed staged CCTV footage of a crime scene and answered 20 questions in a structured interview, during which they were given either positive feedback (a head nod), negative feedback (a head shake) or no feedback. Those presented with positive non-verbal feedback reported inflated confidence compared with those presented with negative non-verbal feedback regardless of accuracy, and this effect was most apparent when participants reported awareness of the feedback. These results provide further insight into the effects of interviewer behaviour in investigative interviews.

    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    A smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model for the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles into either 'spontaneous' or 'posed' categories using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared with other relevant methods.
    Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
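    The core of the pipeline above is a gradient-based descriptor fed to an SVM. A hedged toy sketch of that idea (a crude orientation histogram standing in for real HOG, synthetic 32x32 "faces" standing in for the UvA-NEMO data; CNN features, LPQ, optical flow and EVM are not reproduced):

```python
import numpy as np
from sklearn.svm import SVC

def toy_hog(image, bins=9):
    """Gradient-orientation histogram -- a crude stand-in for HOG."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)  # radians in [-pi, pi]
    hist, _ = np.histogram(orientation, bins=bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-9)  # L1-normalize

rng = np.random.default_rng(0)
# Two synthetic classes with different dominant gradient orientation,
# loosely playing the roles of "posed" vs. "spontaneous" smiles.
posed = [np.outer(np.sin(np.linspace(0, 4, 32)), np.ones(32))
         + 0.1 * rng.standard_normal((32, 32)) for _ in range(20)]
spont = [np.outer(np.ones(32), np.sin(np.linspace(0, 4, 32)))
         + 0.1 * rng.standard_normal((32, 32)) for _ in range(20)]

X = np.array([toy_hog(im) for im in posed + spont])
y = np.array([0] * 20 + [1] * 20)  # 0 = posed, 1 = spontaneous

clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

    In practice the paper's features (real HOG over face crops, CNN embeddings) replace `toy_hog`, and evaluation uses held-out data rather than training accuracy.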

    Deception and self-awareness

    This paper presents a study conducted for the Shades of Grey EPSRC research project (EP/H02302X/1), which aims to develop a suite of interventions for identifying terrorist activities. The study investigated the body movements demonstrated by participants while waiting to be interviewed, in one of two conditions: preparing to lie or preparing to tell the truth. The effect of self-awareness was also investigated, with half of the participants sitting in front of a full-length mirror during the waiting period; the other half faced a blank wall. A significant interaction was found for the duration of hand/arm movements between the deception and self-awareness conditions (F(1, 76) = 4.335, p < 0.05). Without a mirror, participants expecting to lie spent less time moving their hands than those expecting to tell the truth; the opposite was seen in the presence of a mirror. This finding indicates a new research area worth further investigation.

    Spatio-Temporal Sentiment Hotspot Detection Using Geotagged Photos

    We perform spatio-temporal analysis of public sentiment using geotagged photo collections. We develop a deep learning-based classifier that predicts the emotion conveyed by an image. This allows us to associate sentiment with place. We perform spatial hotspot detection and show that different emotions have distinct spatial distributions that match expectations. We also perform temporal analysis using the capture time of the photos. Our spatio-temporal hotspot detection correctly identifies emerging concentrations of specific emotions, and year-by-year analyses of select locations show strong temporal correlations between the predicted emotions and known events.
    Comment: To appear in ACM SIGSPATIAL 201
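    Spatial hotspot detection of the kind described above can be illustrated with a much simpler stand-in: bin emotion-tagged photo coordinates into a grid and flag cells whose counts are statistical outliers (a z-score pass, not the statistic used in the paper; coordinates below are synthetic):

```python
import numpy as np

def grid_hotspots(lons, lats, n_bins=10, z_thresh=2.0):
    """Flag grid cells whose photo count is > z_thresh std above the mean."""
    counts, xedges, yedges = np.histogram2d(lons, lats, bins=n_bins)
    z = (counts - counts.mean()) / (counts.std() + 1e-9)
    return np.argwhere(z > z_thresh), counts

rng = np.random.default_rng(1)
# Uniform background noise plus one dense cluster of (say) "joy" photos
lons = np.concatenate([rng.uniform(0, 10, 200), rng.normal(2.5, 0.2, 300)])
lats = np.concatenate([rng.uniform(0, 10, 200), rng.normal(7.5, 0.2, 300)])

hot_cells, counts = grid_hotspots(lons, lats)
print("hotspot cells (grid indices):", hot_cells)
```

    A production system would use a proper local statistic (e.g. Getis-Ord Gi*) and run the pass per emotion class and per time slice to surface emerging concentrations.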

    Word contexts enhance the neural representation of individual letters in early visual cortex

    Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing even in the earliest visual regions.

    Emotion processing in infancy: specificity in risk for social anxiety and associations with two year outcomes

    The current study examined the specificity of patterns of responding to high- and low-intensity negative emotional expressions in infants of mothers with social phobia, and their association with child outcomes at two years of age. Infants of mothers with social phobia, generalised anxiety disorder (GAD) or no history of anxiety were shown pairs of angry and fearful emotional expressions at 10 weeks of age. Symptoms of social withdrawal, anxiety and sleep problems were assessed at two years of age. Only infants of mothers with social phobia showed a tendency to look away from high-intensity fear faces; however, infants of mothers with both social phobia and GAD showed a bias towards high-intensity angry faces. Among the offspring of mothers with social phobia, anxiety symptoms at two years of age were associated with a preference for high-intensity fear faces in infancy. The reverse pattern was found amongst the offspring of non-anxious mothers. These findings suggest a possible specific response to emotional expressions among the children of mothers with social phobia.

    Recognizing Emotions in a Foreign Language

    Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can be recognized from a speaker's voice regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language ("in-group advantage"). Our findings argue that the ability to understand vocally expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.

    Inversion improves the recognition of facial expression in thatcherized images

    The Thatcher illusion provides a compelling example of the face inversion effect. However, the marked effect of inversion in the Thatcher illusion contrasts with other studies that report only a small effect of inversion on the recognition of facial expressions. To address this discrepancy, we compared the effects of inversion and thatcherization on the recognition of facial expressions. We found that inversion of normal faces caused only a small reduction in the recognition of facial expressions. In contrast, local inversion of facial features in upright thatcherized faces resulted in a much larger reduction in the recognition of facial expressions. Paradoxically, inversion of thatcherized faces caused a relative increase in the recognition of facial expressions. Together, these results suggest that different processes explain the effects of inversion on the recognition of facial expressions and on the perception of the Thatcher illusion. The grotesque perception of thatcherized images is based on a more orientation-sensitive representation of the face. In contrast, the recognition of facial expression is dependent on a more orientation-insensitive representation. A similar pattern of results was evident when only the mouth or eye region was visible. These findings demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the features of the face.